    Closed-loop autonomous docking system

    An autonomous docking system is provided which produces commands for the steering and propulsion system of a chase vehicle used in docking that chase vehicle with a target vehicle. The docking system comprises a passive optical target affixed to the target vehicle, comprising three reflective areas including a central area mounted on a short post, together with tracking sensor and process controller apparatus carried by the chase vehicle. The latter apparatus comprises a laser diode array for illuminating the target so as to cause light to be reflected from the reflective areas of the target; a sensor for detecting the light reflected from the target and for producing an electrical output signal in accordance with an image of the reflected light; a signal processor for processing the electrical output signal and for producing, based thereon, output signals relating to the relative range, roll, pitch, yaw, azimuth, and elevation of the chase and target vehicles; and a docking process controller, responsive to the output signals produced by the signal processor, for producing command signals for controlling the steering and propulsion system of the chase vehicle.
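The sensor-to-controller loop described above can be sketched as a simple proportional step: pose errors in, actuation commands out. The state names and gains below are illustrative assumptions, not the patented controller.

```python
def docking_commands(errors, gains):
    """Toy proportional control step for a docking loop: each pose
    error (e.g. range, azimuth, elevation) maps to a command that
    drives that error toward zero."""
    return {axis: -gains[axis] * err for axis, err in errors.items()}

# Hypothetical pose errors from the signal processor and assumed gains:
errors = {"range": 2.0, "azimuth": 0.1, "elevation": -0.05}
gains = {"range": 0.5, "azimuth": 1.0, "elevation": 1.0}
print(docking_commands(errors, gains))
# → {'range': -1.0, 'azimuth': -0.1, 'elevation': 0.05}
```

A real docking controller would add integral/derivative terms and thruster allocation; this only shows the direction of the feedback.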

    Video guidance sensor for autonomous capture

    A video-based sensor has been developed specifically for the close-range maneuvering required in the last phase of autonomous rendezvous and capture. The system is a combination of target and sensor, with the target being a modified version of the standard target used by the astronauts with the Remote Manipulator System (RMS). The system, as currently configured, works well for autonomous docking maneuvers from approximately forty feet in to soft-docking and capture. The sensor was developed specifically to track and calculate its position and attitude relative to a target consisting of three retro-reflective spots, equally spaced, with the center spot being on a pole. This target configuration was chosen for its sensitivity to small amounts of relative pitch and yaw and because it could be used with a small modification to the standard RMS target already in use by NASA.
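The pitch/yaw sensitivity of the pole-mounted center spot comes from simple geometry: when the target rotates, the elevated spot shifts laterally relative to the two coplanar outer spots by roughly the pole height times the sine of the angle. The sketch below uses assumed numbers, not the actual RMS target dimensions.

```python
import math

def center_spot_offset(pole_height_m, pitch_rad):
    """Lateral shift of a pole-mounted center spot, relative to the
    coplanar outer spots, when the target pitches by pitch_rad.
    Geometry only: offset = h * sin(theta)."""
    return pole_height_m * math.sin(pitch_rad)

# Assumed illustrative values: a 0.1 m pole and a 2-degree pitch
# produce a ~3.5 mm lateral shift, easily resolved by the imager.
offset = center_spot_offset(0.1, math.radians(2.0))
print(round(offset * 1000, 2))  # → 3.49 (millimetres)
```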

    Solar-Powered Cooler and Heater for an Automobile Interior

    The apparatus would include a solar photovoltaic panel mounted on the roof and a panel-like assembly mounted in a window opening. The window-mounted assembly would include a stack of thermoelectric devices sandwiched between two heat sinks. A fan would circulate interior air over one heat sink. Another fan would circulate exterior air over the other heat sink. The fans and the thermoelectric devices would be powered by the solar photovoltaic panel. By means of a double-pole, double-throw switch, the panel voltage fed to the thermoelectric stack would be set to the desired polarity: For cooling operation, the chosen polarity would be one in which the thermoelectric devices transport heat from the inside heat sink to the outside one; for heating operation, the opposite polarity would be chosen. Because thermoelectric devices are more efficient in heating than in cooling, this apparatus would be more effective as a heater than as a cooler. However, if the apparatus were to include means to circulate air between the outside and the inside without opening the windows, then its effectiveness as a cooler in a hot, sunny location would be increased.
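The heating-versus-cooling asymmetry follows from the standard single-stage thermoelectric module model: Joule heat works against the cold side but for the hot side, so the heating COP exceeds the cooling COP by exactly 1. This is the textbook model with assumed parameter values, not figures from the article.

```python
def peltier_heat_flows(S, I, R, K, T_cold, T_hot):
    """Standard thermoelectric module model. Returns (Q_cold, Q_hot,
    P_in) in watts. S: Seebeck coefficient [V/K], I: current [A],
    R: electrical resistance [ohm], K: thermal conductance [W/K]."""
    dT = T_hot - T_cold
    q_cold = S * I * T_cold - 0.5 * I**2 * R - K * dT  # heat pumped from cold side
    q_hot = S * I * T_hot + 0.5 * I**2 * R - K * dT    # heat rejected at hot side
    p_in = q_hot - q_cold                              # electrical input power
    return q_cold, q_hot, p_in

# Assumed illustrative values for one small module:
q_c, q_h, p = peltier_heat_flows(S=0.05, I=3.0, R=1.5, K=0.5,
                                 T_cold=295.0, T_hot=310.0)
# Heating COP minus cooling COP is exactly 1 (the Joule heat):
print(round(q_h / p - q_c / p, 3))  # → 1.0
```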

    Six-Message Electromechanical Display System

    A proposed electromechanical display system would be capable of presenting as many as six distinct messages. In the proposed system, each display element would include a cylinder having a regular hexagonal cross section.

    Fiber Coupled Laser Diodes with Even Illumination Pattern

    An optical fiber for evenly illuminating a target. The optical fiber is coupled to a laser-emitting diode and receives laser light. The laser light travels through the fiber optic and exits at an exit end. The exit end has a diffractive optical pattern formed thereon via etching, molding, or cutting, to reduce the Gaussian profile present in conventional fiber optic cables. The reduction of the Gaussian profile provides an even illumination from the fiber optic cable.

    Autoguidance video sensor for docking

    The Automated Rendezvous and Docking system (ARAD) is composed of two parts. The first part is the sensor, which consists of a video camera ringed with laser diodes of two wavelengths. The second part is a standard Remote Manipulator System (RMS) target used on the Orbiter that has been modified with three circular pieces of retro-reflective tape covered by optical filters corresponding to one of the laser diode wavelengths. The sensor is on the chase vehicle and the target is on the target vehicle. The ARAD system works by pulsing the laser diodes of one wavelength and taking a picture. Then the laser diodes of the second wavelength are pulsed and a second picture is taken. One picture is subtracted from the other and the resultant picture is thresholded. All adjacent pixels above threshold are blobbed together and their X and Y centroids calculated. All blob centroids are checked to recognize the target out of noise. Then the three target spots are windowed and tracked. The three target spot centroids are used to evaluate the roll, yaw, pitch, range, azimuth, and elevation. From that, a guidance routine can guide the chase vehicle to dock with the target vehicle in the correct orientation.
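The subtract-threshold-blob pipeline described above can be sketched in a few lines. This is an illustrative reconstruction of the spot-detection step (the frame sizes, values, and flood-fill details are assumptions), not the flight software.

```python
def find_blob_centroids(img_on, img_off, threshold):
    """Subtract the off-wavelength frame from the on-wavelength frame,
    threshold the difference, group adjacent above-threshold pixels
    into blobs, and return each blob's (x, y) centroid."""
    h, w = len(img_on), len(img_on[0])
    mask = [[img_on[y][x] - img_off[y][x] > threshold for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0][x0] and not seen[y0][x0]:
                stack, pixels = [(x0, y0)], []
                seen[y0][x0] = True
                while stack:  # flood-fill one connected blob
                    x, y = stack.pop()
                    pixels.append((x, y))
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                cx = sum(p[0] for p in pixels) / len(pixels)
                cy = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx, cy))
    return centroids

# Two synthetic 5x5 frames with one bright retro-reflector spot at (2, 2):
on_frame = [[10] * 5 for _ in range(5)]
off_frame = [[10] * 5 for _ in range(5)]
for x, y in [(2, 1), (1, 2), (2, 2), (3, 2), (2, 3)]:
    on_frame[y][x] = 200
print(find_blob_centroids(on_frame, off_frame, threshold=50))  # → [(2.0, 2.0)]
```

The three largest blob centroids would then feed the pose computation (range, roll, pitch, yaw, azimuth, elevation).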

    Multi-sensor Testing for Automated Rendezvous and Docking

    During the past two years, many sensors have been tested in an open-loop fashion in the Marshall Space Flight Center (MSFC) Flight Robotics Laboratory (FRL) both to determine their suitability for use in Automated Rendezvous and Docking (AR&D) systems and to ensure the test facility is prepared for future multi-sensor testing. The primary focus of this work was in support of the CEV AR&D system, because the AR&D sensor technology area was identified as one of the top risks in the program. In 2006, four different sensors were tested individually or in a pair in the MSFC FRL. In 2007, four sensors, two each of two different types, were tested simultaneously. In each set of tests, the target was moved through a series of pre-planned trajectories while the sensor tracked it. In addition, a laser tracker "truth" sensor also measured the target motion. The tests demonstrated the functionality of testing four sensors simultaneously as well as the capabilities (both good and bad) of all of the different sensors tested. This paper outlines the test setup and conditions, briefly describes the facility, summarizes the earlier results of the individual sensor tests, and describes in some detail the results of the four-sensor testing. Post-test analysis includes data fusion by minimum variance estimation and sequential Kalman filtering. This Sensor Technology Project work was funded by NASA's Exploration Technology Development Program.
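The minimum variance estimation mentioned in the post-test analysis is, in its simplest static form, an inverse-variance weighted average of independent measurements of the same quantity. The sketch below shows that textbook estimator with made-up sensor numbers; the paper's actual fusion pipeline is not reproduced here.

```python
def min_variance_fuse(measurements, variances):
    """Minimum-variance combination of independent measurements of one
    quantity: weight each measurement by its inverse variance. Returns
    the fused estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    fused = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical example: two range sensors read 10.2 m (variance 0.04 m^2)
# and 10.0 m (variance 0.01 m^2); the fused estimate leans toward the
# more precise sensor and its variance is smaller than either input.
est, var = min_variance_fuse([10.2, 10.0], [0.04, 0.01])
print(round(est, 3), round(var, 4))
```

A sequential Kalman filter generalizes this to time-varying states and measurement streams.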

    The Next Generation Advanced Video Guidance Sensor: Flight Heritage and Current Development

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) is the latest in a line of sensors that have flown four times in the last 10 years. The NGAVGS has been under development for the last two years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km, and the first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. This paper presents the flight heritage and results of the sensor technology, describes some hardware trades for the current sensor, and discusses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading the AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the various NGAVGS development units is discussed, along with the use of the NGAVGS as a proximity operations and docking sensor.

    POSE Algorithms for Automated Docking

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously, NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission, a large set of high-quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different fields of view (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
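The simplest building block of a bearing-based POSE solution is the range estimate from the apparent separation of two target features a known distance apart, via the pinhole camera model. The numbers below are illustrative assumptions chosen to land near the 1.22 m docking range; this is not the VNS algorithm itself, which solves the full six-degree-of-freedom problem.

```python
def range_from_spot_separation(focal_px, baseline_m, separation_px):
    """Pinhole-camera range estimate: two features a known baseline
    apart appear separated by (focal * baseline / range) pixels, so
    range = focal * baseline / separation."""
    return focal_px * baseline_m / separation_px

# Assumed values: 1000 px focal length, 0.2 m feature baseline,
# 164 px apparent separation -> roughly the 1.22 m docking range.
print(round(range_from_spot_separation(1000.0, 0.2, 164.0), 2))  # → 1.22
```

A full POSE algorithm extends this idea to all spots at once, solving jointly for the three translation and three rotation parameters.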